
    Trends in Phishing Attacks: Suggestions for Future Research

    Deception in computer-mediated communication is a widespread phenomenon. Cyber criminals exploit technological mediums to communicate with potential targets because these channels reduce both deception cues and the risk of detection. A prevalent deception-based attack in computer-mediated communication is phishing. Prior phishing research has addressed the “bait” and “hook” components of phishing attacks, the human-computer interaction that takes place as users judge the veracity of phishing emails and websites, and the development of technologies that can aid users in identifying and rejecting these attacks. Despite the extant research on this topic, phishing attacks continue to be successful as tactics evolve, rendering existing research less relevant, and users disregard the recommendations of automated phishing tools. This paper summarizes the core of phishing research, provides an update on trending attack methods, and proposes future research addressing computer credibility in a phishing context.

    Countermeasures and Eye Tracking Deception Detection

    A new development in the field of deception detection has been the emergence of rapid, noncontact tools for automated detection. This research-in-progress paper describes a method for assessing the robustness of eye tracker-based deception detection to countermeasures employed by knowledgeable participants.

    Real-time Embodied Agent Adaptation

    This paper reports on an initial investigation of two emerging technologies, FaceFX and Smartbody, capable of creating life-like animations for embodied conversational agents (ECAs) such as the AVATAR agent. Real-time rendering and animation generation technologies can enable rapid adaptation of ECAs to changing circumstances. The benefits of each package are discussed.

    Man vs. machine: Investigating the effects of adversarial system use on end-user behavior in automated deception detection interviews

    Deception is an inevitable component of human interaction. Researchers and practitioners are developing information systems to aid in the detection of deceptive communication. Information systems are typically adopted by end users to aid in completing a goal or objective (e.g., increasing the efficiency of a business process). However, end-user interactions with deception detection systems (adversarial systems) are unique because the goals of the system and the user are orthogonal. Prior work investigating systems-based deception detection has focused on the identification of reliable deception indicators. This research extends extant work by examining how users of deception detection systems alter their behavior in response to the presence of guilty knowledge, relevant stimuli, and system knowledge. An analysis of data collected during two laboratory experiments reveals that guilty knowledge, relevant stimuli, and system knowledge all lead to increased use of countermeasures. The implications and limitations of this research are discussed, and avenues for future research are outlined.

    Developing a measure of adversarial thinking in social engineering scenarios

    Social engineering is a major issue for organizations. In this paper, we propose that increasing adversarial thinking can improve individual resistance to social engineering attacks. We formalize our understanding of adversarial thinking using Utility Theory. Next, we develop a measure of adversarial thinking in a text-based context. Lastly, the paper reports on two studies that demonstrate the effectiveness of the newly developed measure. We show that the measure of adversarial thinking has variability, can be manipulated with training, and is not significantly influenced by priming. The paper also shows that social engineering training has an influence on adversarial thinking and that practicing against an adversarial conversational agent has a positive influence on adversarial thinking.

    The effect of conversational agent skill on user behavior during deception

    Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA affect the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers seeking to mask their deception. To better understand this interaction, this research investigates the effect of a CA's conversational skill, that is, its ability to mimic human conversation, on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads to users engaging in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counterproductive.

    Kinesic Patterning in Deceptive and Truthful Interactions

    A persistent question in the deception literature has been the extent to which nonverbal behaviors can reliably distinguish between truth and deception. It has been argued that deception instigates cognitive load and arousal that are betrayed through visible nonverbal indicators. Yet empirical evidence has often failed to find statistically significant or strong relationships. Given that interpersonal message production is characterized by a high degree of simultaneous and serial patterning among multiple behaviors, it may be that patterns of behaviors are more diagnostic of veracity. Or it may be that the theorized linkage between nonverbal behaviors and internal states of arousal, cognitive taxation, and efforts to control behavior is wrong. The current investigation addressed these possibilities by applying a software program called THEME to analyze the patterns of kinesic movements (adaptor gestures, illustrator gestures, and speaker and listener head movements) rated by trained coders for participants in a mock crime experiment. Our multifaceted analysis revealed that the quantity and quality of patterns distinguish truths from untruths. Quantitative and qualitative analyses conducted by case and condition revealed high variability in the types and complexities of the patterns produced, as well as differences between truthful and deceptive respondents questioned about a theft. Patterns incorporating adaptor and illustrator gestures were correlated in counterintuitive ways with arousal, cognitive load, and behavioral control, and qualitative analyses produced unique insights into truthful and untruthful communication.

    Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment

    This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chat bot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants’ perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chat bot (i.e., response latency and pauses) when participants engaged in deception. We discovered that a chat bot that provides adaptive responses based on the participant’s input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chat bot exhibited consistent response latencies and pause lengths, while deceivers interacting with a static chat bot exhibited longer response latencies and pause lengths. These results provide new insights into social interactions with computer agents during truthful and deceptive exchanges.

    Examining the learning effects of live streaming video game instruction over Twitch

    Technology facilitates advances in learning and drives learning paradigms. One recent innovation is Twitch™, an online streaming platform that is often used for video game tutorials but also enables amateur online instruction (Hamilton, Garretson, & Kerne, 2014). Twitch represents a unique learning paradigm that is not perfectly represented in previous technologies because of its “ground-up” evolution and the opportunity for novice instructors to educate mass audiences in real time over the Internet while enabling interaction between teachers and learners and among learners. The purpose of this research is to empirically examine the efficacy of Twitch as a learning platform by manipulating each of its key characteristics and to understand the conditions in which novice instructors may be beneficial. Drawing from Cognitive Load Theory, we demonstrate the worked-example effect in the Twitch environment by manipulating teacher-learner-learner interactions, live versus recorded streaming, and expert- versus novice-based instruction. Based on a laboratory experiment involving 350 participants, we found that learning performance under novice instructors was at least as good as that under experts. However, an exploratory analysis of learner personalities revealed that extroverts benefit only when learner-learner interaction is enabled. Surprisingly, those who are highly agreeable and less neurotic benefited more from novice instructors.

    When Disclosure is Involuntary: Empowering Users with Control to Reduce Concerns

    Modern organizations must carefully balance the practice of gathering large amounts of valuable data from individuals with the associated ethical considerations and the potential damage to public image inherent in breaches of privacy. As it becomes increasingly commonplace for many types of information to be collected without individuals' knowledge or consent, managers and researchers alike can benefit from understanding how individuals react to such involuntary disclosures, and how these reactions can affect evaluations of the data-collecting organizations. This research develops and empirically tests a theoretical model that shows how empowering individuals with a sense of control over their personal information can help mitigate privacy concerns following an invasion of privacy. Using a controlled experiment with 94 participants, we show that increasing control can reduce privacy concerns and significantly influence individuals' attitudes toward the organization that has committed a privacy invasion. We discuss theoretical and practical implications of our work.